
Dynamics of online hate and misinformation / Cinelli, M.; Pelicon, A.; Mozetic, I.; Quattrociocchi, W.; Novak, P. K.; Zollo, F.. - In: SCIENTIFIC REPORTS. - ISSN 2045-2322. - 11:1(2021), p. 22083. [10.1038/s41598-021-01487-w]

Dynamics of online hate and misinformation

Cinelli M. (First author; Member of the Collaboration Group); Quattrociocchi W.
2021

Abstract

Online debates are often characterised by extreme polarisation and heated discussion among users. The presence of hate speech online is becoming increasingly problematic, making the development of appropriate countermeasures necessary. In this work, we perform hate speech detection on a corpus of more than one million comments on YouTube videos using a machine learning model trained and fine-tuned on a large set of hand-annotated data. Our analysis shows no evidence of “pure haters”, understood as active users who post exclusively hateful comments. Moreover, consistently with the echo chamber hypothesis, we find that users skewed towards one of the two categories of video channels (questionable, reliable) are more prone to use inappropriate, violent, or hateful language within their opponents’ community. Interestingly, users loyal to reliable sources on average use more toxic language than their counterparts. Finally, we find that the overall toxicity of a discussion increases with its length, measured both in terms of number of comments and time. Our results show that, consistently with Godwin’s law, online debates tend to degenerate towards increasingly toxic exchanges of views.
2021
Machine Learning, Hate Speech, Social Media, Polarization, Misinformation
01 Journal publication::01a Journal article
Files attached to this product
There are no files associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11573/1593291
Warning

Warning! The displayed data have not been validated by the university.

Citations
  • PMC 11
  • Scopus 36
  • Web of Science (ISI) 29